Joint Algorithm-Architecture Optimization of CABAC
Authors
V. Sze, A. P. Chandrakasan
Abstract
This paper applies joint optimization of algorithm and architecture to achieve high coding efficiency together with high processing speed and low area cost. Specifically, it presents several optimizations of Context Adaptive Binary Arithmetic Coding (CABAC), a form of entropy coding used in H.264/AVC, to achieve the throughput necessary for real-time, low-power, high-definition video coding. The combination of syntax element partitions and interleaved entropy slices, referred to as Massively Parallel CABAC, increases the number of binary symbols (bins) that can be processed per cycle. Subinterval reordering reduces the cycle time required to process each bin. Under common conditions using the JM12.0 software, Massively Parallel CABAC increases the bins processed per cycle by 2.7 to 32.8x at a cost of 0.25% to 6.84% coding loss relative to sequential single-slice H.264/AVC CABAC; it also provides a 2x reduction in area cost and reduces memory bandwidth. Subinterval reordering shortens the critical path by 14% to 22%, while modifications to context selection reduce the memory requirement by 67%. This work illustrates that accounting for implementation cost during video coding algorithm design can enable higher processing speed and reduce hardware cost while still delivering high coding efficiency in the next-generation video coding standard.

This work was funded by Texas Instruments, which also provided chip fabrication. The work of V. Sze was supported by the Texas Instruments Graduate Women's Fellowship for Leadership in Microelectronics and by NSERC. V. Sze and A. P. Chandrakasan are with the Microsystems Technology Laboratories, Massachusetts Institute of Technology, Cambridge, MA 02139, USA (e-mail: [email protected]; [email protected]).
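To make the subinterval reordering idea concrete, below is a minimal C sketch of one bin-decoding step written two ways: the conventional ordering, which compares the offset against range minus rangeLPS, and a reordered variant that places the LPS subinterval at the bottom of the interval so the comparison uses rangeLPS directly and the subtraction moves off the critical path. The lps_width() formula, the values in main(), and the omission of renormalization and probability-state updates are illustrative assumptions; this is not the H.264/AVC table logic or the authors' hardware design, and the two orderings define different interval mappings, each requiring a matching encoder.

/*
 * Minimal sketch of one CABAC bin-decoding step, for illustration only.
 * The LPS width formula, initial values, and omitted renormalization/state
 * update are placeholders, not the H.264/AVC tables or the authors' design.
 */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t range;   /* current interval width                     */
    uint32_t offset;  /* value read from the bitstream              */
    uint8_t  state;   /* probability state of the selected context  */
    uint8_t  mps;     /* most probable symbol of the selected context */
} cabac_dec;

/* Placeholder LPS subinterval width; real CABAC uses a 64x4 lookup table. */
static uint32_t lps_width(uint8_t state, uint32_t range)
{
    uint32_t q = (range >> 6) & 3;                    /* quantized range index */
    return 16u + (uint32_t)(63 - state) * (q + 1);    /* illustrative only     */
}

/* Conventional ordering: MPS subinterval first; compare needs range - rangeLPS. */
static int decode_bin_baseline(cabac_dec *d)
{
    uint32_t rlps = lps_width(d->state, d->range);
    uint32_t rmps = d->range - rlps;          /* subtraction feeds the compare */
    int bin;
    if (d->offset >= rmps) {                  /* offset landed in LPS interval */
        bin = !d->mps;
        d->offset -= rmps;
        d->range = rlps;
    } else {
        bin = d->mps;
        d->range = rmps;
    }
    /* renormalization and probability-state update omitted for brevity */
    return bin;
}

/* Subinterval reordering: LPS interval at the bottom, so the compare uses
 * rangeLPS directly and the range subtraction is off the critical path. */
static int decode_bin_reordered(cabac_dec *d)
{
    uint32_t rlps = lps_width(d->state, d->range);
    int bin;
    if (d->offset < rlps) {                   /* compare directly with rangeLPS */
        bin = !d->mps;
        d->range = rlps;
    } else {
        bin = d->mps;
        d->offset -= rlps;
        d->range -= rlps;                     /* no longer gates the comparison */
    }
    /* renormalization and probability-state update omitted for brevity */
    return bin;
}

int main(void)
{
    cabac_dec a = { 0x1FF, 0x0A0, 40, 1 };
    cabac_dec b = a;
    printf("baseline bin: %d, reordered bin: %d\n",
           decode_bin_baseline(&a), decode_bin_reordered(&b));
    return 0;
}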
Related resources
Design and Implementation of a High-Throughput CABAC Hardware Accelerator for the HEVC Decoder
HEVC is the new video coding standard of the Joint Collaborative Team on Video Coding. As in its predecessor H.264/AVC, Context-based Adaptive Binary Arithmetic Coding (CABAC) is a throughput bottleneck. This paper presents a hardware acceleration approach for transform coefficient decoding, the most time consuming part of CABAC in HEVC. In addition to a baseline design, a pipelined architectur...
Parallel algorithms and architectures for low power video decoding
Parallelism coupled with voltage scaling is an effective approach to achieve high processing performance with low power consumption. This thesis presents parallel architectures and algorithms designed to deliver the power and performance required for current and next generation video coding. Coding efficiency, area cost and scalability are also addressed. First, a low power video decoder is pre...
Joint inversion of ReMi dispersion curves and refraction travel times using particle swarm optimization algorithm
Shear-wave velocity ( ) is an important parameter used for site characterization in geotechnical engineering. However, dispersion curve inversion is challenging for most inversion methods due to its high non-linearity and mix-determined trait. In order to overcome these problems, in this study, a joint inversion strategy is proposed based on the particle swarm optimization (PSO) algorithm. The ...
Survey of advanced CABAC accelerator architectures for future multimedia
The future high quality multimedia systems require efficient video coding algorithms and corresponding adaptive high-performance computational platforms. In this paper, we survey the hardware accelerator architectures for Context-based Adaptive Binary Arithmetic Coding (CABAC) of H.264/AVC. The purpose of the survey is to deliver a critical insight in the proposed solutions, and this way facili...
A VLSI Architecture for High Performance CABAC Encoding
One key technique for improving the coding efficiency of the H.264 video standard is its entropy coder, the context-adaptive binary arithmetic coder (CABAC). However, the complexity of the CABAC encoding process is significantly higher than that of table-driven entropy encoding schemes such as Huffman coding. CABAC is also bit serial, and its multi-bit parallelization is extremely difficult. For a high...
Journal: Signal Processing Systems
Volume: 69, Issue: -
Pages: -
Published: 2012